Demand Response


Turning AI Data Centers into Grid-Interactive Assets: Results from a Field Demonstration in Phoenix, Arizona

Colangelo, Philip, Coskun, Ayse K., Megrue, Jack, Roberts, Ciaran, Sengupta, Shayan, Sivaram, Varun, Tiao, Ethan, Vijaykar, Aroon, Williams, Chris, Wilson, Daniel C., MacFarland, Zack, Dreiling, Daniel, Morey, Nathan, Ratnayake, Anuja, Vairamohan, Baskar

arXiv.org Artificial Intelligence

Artificial intelligence (AI) is fueling exponential electricity demand growth, threatening grid reliability, raising prices for communities paying for new energy infrastructure, and stunting AI innovation as data centers wait for interconnection to constrained grids. This paper presents the first field demonstration, in collaboration with major corporate partners, of a software-only approach--Emerald Conductor--that transforms AI data centers into flexible grid resources that can efficiently and immediately harness existing power systems without massive infrastructure buildout. Conducted at a 256-GPU cluster running representative AI workloads within a commercial, hyperscale cloud data center in Phoenix, Arizona, the trial achieved a 25% reduction in cluster power usage for three hours during peak grid events while maintaining AI quality of service (QoS) guarantees. By orchestrating AI workloads based on real-time grid signals without hardware modifications or energy storage, this platform reimagines data centers as grid-interactive assets that enhance grid reliability, advance affordability, and accelerate AI's development.
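The headline mechanism, cutting cluster power by throttling flexible workloads while protecting latency-sensitive ones, can be sketched as a toy curtailment planner. This is a hypothetical simplification, not Emerald Conductor's actual algorithm; the job names, power figures, and priority scheme are all invented:

```python
# Toy grid-event curtailment planner (hypothetical; NOT Emerald
# Conductor's method): throttle the lowest-priority jobs first until the
# cluster meets a target power reduction, leaving QoS-critical jobs alone.

def plan_curtailment(jobs, target_reduction_kw):
    """jobs: list of (name, power_kw, priority, min_power_kw).
    Returns (caps, feasible): per-job power caps and whether the
    requested reduction was met."""
    caps = {name: power for name, power, _, _ in jobs}
    remaining = target_reduction_kw
    # Throttle in ascending priority order (lower number = less critical).
    for name, power, _, floor in sorted(jobs, key=lambda j: j[2]):
        if remaining <= 0:
            break
        cut = min(power - floor, remaining)
        caps[name] = power - cut
        remaining -= cut
    return caps, remaining <= 1e-9

jobs = [
    ("batch-training", 60.0, 1, 20.0),  # flexible pretraining job
    ("fine-tune",      30.0, 2, 15.0),  # moderately flexible
    ("inference",      40.0, 3, 38.0),  # latency-sensitive, near-fixed
]
total = sum(power for _, power, _, _ in jobs)          # 130 kW
caps, feasible = plan_curtailment(jobs, 0.25 * total)  # 25% event target
```

In this sketch the entire 25% reduction comes out of the flexible batch job, mirroring the trial's approach of orchestrating workloads rather than modifying hardware or adding storage.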


Demand Response Optimization MILP Framework for Microgrids with DERs

Babu, K. Victor Sam Moses, Chakraborty, Pratyush, Pal, Mayukha

arXiv.org Artificial Intelligence

The integration of renewable energy sources in microgrids introduces significant operational challenges due to their intermittent nature and the mismatch between generation and demand patterns. Effective demand response (DR) strategies are crucial for maintaining system stability and economic efficiency, particularly in microgrids with high renewable penetration. This paper presents a comprehensive mixed-integer linear programming (MILP) framework for optimizing DR operations in a microgrid with solar generation and battery storage systems. The framework incorporates load classification, dynamic price thresholding, and multi-period coordination for optimal DR event scheduling. Analysis across seven distinct operational scenarios demonstrates consistent peak load reduction of 10% while achieving energy cost savings ranging from 13.1% to 38.0%. The highest performance was observed in scenarios with high solar generation, where the framework achieved 38.0% energy cost reduction through optimal coordination of renewable resources and DR actions. The results validate the framework's effectiveness in managing diverse operational challenges while maintaining system stability and economic efficiency.
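The optimization the abstract describes can be illustrated with a much smaller brute-force analogue. All numbers below are invented, binary enumeration replaces the MILP solver, and a single "recovery period" rule is a crude stand-in for the paper's multi-period coordination:

```python
# Toy DR-event scheduler (illustrative; a real implementation would pass
# this to a MILP solver): choose binary DR events per period to minimize
# energy cost, shifting curtailed flexible load to the cheapest period.
from itertools import product

prices   = [0.10, 0.12, 0.30, 0.35, 0.32, 0.15, 0.11, 0.10]  # $/kWh
base     = [4, 4, 6, 7, 6, 5, 4, 4]   # inflexible load (kWh)
flexible = [1, 1, 2, 2, 2, 1, 1, 1]   # curtailable load (kWh)
max_events = 3                        # comfort limit on DR events

def evaluate(events):
    """Cost when DR events curtail flexible load, which is recovered in
    the cheapest non-event period (energy-neutral load shifting)."""
    load = [b + f for b, f in zip(base, flexible)]
    shifted = 0.0
    for t, on in enumerate(events):
        if on:
            load[t] -= flexible[t]
            shifted += flexible[t]
    t_cheap = min((t for t in range(len(prices)) if not events[t]),
                  key=lambda t: prices[t])
    load[t_cheap] += shifted
    return sum(p * l for p, l in zip(prices, load)), load

best = min((e for e in product([0, 1], repeat=len(prices))
            if sum(e) <= max_events), key=lambda e: evaluate(e)[0])
base_cost, _ = evaluate((0,) * len(prices))
opt_cost, opt_load = evaluate(best)
```

As expected, the optimizer places its three events on the highest-price periods and shifts the curtailed energy to the cheapest one, reducing cost while keeping total consumption unchanged.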


Estimating the Unobservable Components of Electricity Demand Response with Inverse Optimization

Esteban-Perez, Adrian, Bunn, Derek, Ghiassi-Farrokhfal, Yashar

arXiv.org Artificial Intelligence

Understanding and predicting the electricity demand responses to prices are critical activities for system operators, retailers, and regulators. While conventional machine learning and time series analyses have been adequate for the routine demand patterns that have adapted only slowly over many years, the emergence of active consumers with flexible assets, such as solar-plus-storage systems and electric vehicles, introduces new challenges. These active consumers exhibit more complex consumption patterns, the drivers of which are often unobservable to the retailers and system operators. In practice, system operators and retailers can only monitor the net demand (metered at grid connection points), which reflects the overall energy consumption or production exchanged with the grid. As a result, all "behind-the-meter" activities, such as the use of flexibility, remain hidden from these entities. Such behind-the-meter behavior may be controlled by third-party agents or incentivized by tariffs; in either case, the retailer's revenue and the system loads would be impacted by these activities behind the meter, but their details can only be inferred. We define the main components of net demand as baseload, flexible, and self-generation, each having nonlinear responses to market price signals. As flexible demand response and self-generation increase, a pressing question arises: whether existing methods still perform well and, if not, whether there is an alternative way to understand and project the unobserved components of behavior. In response to this practical challenge, we evaluate the potential of a data-driven inverse optimization (IO) methodology. This approach characterizes decomposed consumption patterns without requiring direct observation of behind-the-meter behavior or device-level metering [...]
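A toy parameter-recovery exercise conveys the setting. Grid-search least squares stands in for the authors' inverse-optimization method, the linear response form and all parameter values are invented, and the exercise even reproduces the identifiability issue the paper motivates:

```python
# Toy decomposition of net demand (illustrative; NOT the paper's IO
# method): net = baseload + price-flexible demand - self-generation, with
# only net demand observable at the meter.

def net_demand(price, base, slope, selfgen):
    flexible = max(0.0, 10.0 - slope * price)  # price-responsive component
    return base + flexible - selfgen

true = dict(base=5.0, slope=2.0, selfgen=3.0)  # hidden behind the meter
prices = [0.5, 1.0, 2.0, 3.0, 4.0, 6.0]
obs = [net_demand(p, **true) for p in prices]  # all the retailer sees

def sse(base, slope, selfgen):
    return sum((net_demand(p, base, slope, selfgen) - o) ** 2
               for p, o in zip(prices, obs))

grid = [0.5 * i for i in range(21)]            # candidate values 0..10
best = min(((b, s, g) for b in grid for s in grid for g in grid),
           key=lambda t: sse(*t))
# The price slope is recovered exactly, but baseload and self-generation
# are only identified through their difference, which is precisely the
# "unobservable components" problem the paper addresses.
```

Even with a perfect fit to net demand, several (baseload, self-generation) pairs explain the data equally well; only richer structure or priors, as in the IO methodology, can separate them.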


Learning and Optimization for Price-based Demand Response of Electric Vehicle Charging

Gu, Chengyang, Pan, Yuxin, Liu, Ruohong, Chen, Yize

arXiv.org Artificial Intelligence

In the context of charging electric vehicles (EVs), price-based demand response (PBDR) is becoming increasingly significant for charging load management. Such programs usually encourage cost-sensitive customers to adjust their energy demand in response to price changes for financial incentives. Thus, to model and optimize EV charging, it is important for the charging station operator to model the PBDR patterns of EV customers by precisely predicting charging demands given price signals. The operator then refers to these demands to optimize the charging station's power allocation policy. The standard pipeline involves offline fitting of a PBDR function based on historical EV charging records, followed by applying estimated EV demands in downstream charging station operation optimization. In this work, we propose a new decision-focused end-to-end framework for PBDR modeling that combines prediction errors and downstream optimization cost errors in the model learning stage. We evaluate the effectiveness of our method on a simulation of charging station operation with synthetic PBDR patterns of EV customers, and experimental results demonstrate that this framework can provide a more reliable prediction model for the ultimate optimization process, leading to more effective optimization solutions in terms of cost savings and charging station operation objectives with only a few training samples.
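The value of folding the downstream cost into training can be seen even in a one-parameter toy. Everything here is invented: a linear demand model replaces the learned PBDR function, and an asymmetric shortfall penalty stands in for the charging-station operation cost:

```python
# Prediction-focused vs. decision-focused fitting of a toy PBDR slope
# (illustrative; not the paper's framework). Under-allocating power costs
# 10x more than over-allocating, so the decision-focused fit hedges.

def true_demand(price):
    return max(0.0, 12.0 - 3.0 * price)

prices = [1.0, 1.5, 2.0, 2.5, 3.0]
noise = [0.8, -0.9, 0.7, -0.6, 0.5]          # fixed "historical" noise
observed = [true_demand(p) + n for p, n in zip(prices, noise)]

def predict(b, price):
    return max(0.0, 12.0 - b * price)        # model: known intercept, slope b

def operating_cost(b):
    cost = 0.0
    for p, d in zip(prices, observed):
        alloc = predict(b, p)                # allocate the predicted demand
        cost += 10.0 * max(0.0, d - alloc) + max(0.0, alloc - d)
    return cost

candidates = [2.0 + 0.05 * i for i in range(41)]   # slope grid [2.0, 4.0]
b_mse = min(candidates, key=lambda b: sum(
    (predict(b, p) - d) ** 2 for p, d in zip(prices, observed)))
b_dfl = min(candidates, key=operating_cost)        # decision-focused fit
```

The decision-focused slope deliberately over-predicts demand (b_dfl < b_mse) because shortfalls are expensive, yielding a lower operating cost than the best prediction-error fit, which is the intuition behind training with the downstream objective in the loop.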


Contextual Restless Multi-Armed Bandits with Application to Demand Response Decision-Making

Chen, Xin, Hou, I-Hong

arXiv.org Artificial Intelligence

This paper introduces a novel multi-armed bandit framework, termed Contextual Restless Bandits (CRB), for complex online decision-making. The CRB framework incorporates the core features of contextual bandits and restless bandits, so that it can model both the internal state transitions of each arm and the influence of external global environmental contexts. Using the dual decomposition method, we develop a scalable index policy algorithm for solving the CRB problem and theoretically analyze its asymptotic optimality. When the arm models are unknown, we further propose a model-based online learning algorithm based on the index policy to learn the arm models and make decisions simultaneously. Furthermore, we apply the proposed CRB framework and the index policy algorithm specifically to the demand response decision-making problem in smart grids. Numerical simulations demonstrate the performance and efficiency of our proposed CRB approaches.
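A schematic index policy shows the control structure the abstract describes. The index formula, state dynamics, and context below are all invented placeholders, since the paper derives its indices via dual decomposition:

```python
# Schematic CRB-style index policy (illustrative index and dynamics, not
# the paper's dual-decomposition indices): a global context scales each
# arm's index, and the k highest-index arms are activated per step.

def index(state, context):
    # hypothetical index: marginal value of serving this arm now
    return context * (1.0 - state)

def step(states, context, k):
    ranked = sorted(range(len(states)),
                    key=lambda i: -index(states[i], context))
    active = set(ranked[:k])
    new_states = [min(1.0, s + 0.5) if i in active      # serving raises state
                  else max(0.0, s - 0.25)               # idle arms decay
                  for i, s in enumerate(states)]
    return new_states, active

states = [0.0, 0.4, 0.8, 0.2]       # e.g., recent curtailment of 4 loads
states, active = step(states, context=1.0, k=2)
```

In the demand response application, each arm would be a flexible load whose internal state evolves whether or not it is called, and the context would be a global grid signal such as price or system stress.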


Energy Flexibility Potential in the Brewery Sector: A Multi-agent Based Simulation of 239 Danish Breweries

Howard, Daniel Anthony, Ma, Zheng Grace, Engvang, Jacob Alstrup, Hagenau, Morten, Jorgensen, Kathrine Lau, Olesen, Jonas Fausing, Jørgensen, Bo Nørregaard

arXiv.org Artificial Intelligence

The grid stability and security of supply are challenged due to the increasing penetration of renewable energy sources in the electricity grid [1]. Furthermore, conventional balancing of the electricity grid through supply-side management is becoming costly, and the capacity required to ensure the security of supply would be inefficient [2]. Demand-side management has seen increasing potential to mitigate the impact of fluctuations in the electricity grid and aid in stabilization by adjusting consumer demand subject to electricity market conditions [3]. Demand-side management can be divided based on the load-shape objective, e.g., peak clipping, valley filling, and [...] The beverage industry is a typical food processing industry and accounts for significant energy consumption, e.g., 1 % of Danish energy consumption [10]. The beverage industry can be further divided based on the beverage type, with beer production being the category with the highest energy consumption, accounting for 40 % of the beverage industry's combined energy consumption [10]. For instance, Denmark has the highest number of breweries per capita among the European nations [11]. As of April 2022, there were 275 breweries in Denmark. A survey of the Danish Brewery Association's members shows that approximately 50 % of Danish beverage facilities might permanently close or go bankrupt due to COVID-19 and the increasing energy prices [12].


Advancing Attack-Resilient Scheduling of Integrated Energy Systems with Demand Response via Deep Reinforcement Learning

Li, Yang, Ma, Wenjie, Li, Yuanzheng, Li, Sen, Chen, Zhe

arXiv.org Artificial Intelligence

Optimally scheduling multi-energy flow is an effective method to utilize renewable energy sources (RES) and improve the stability and economy of integrated energy systems (IES). However, the stable demand-supply of IES faces challenges from uncertainties that arise from RES and loads, as well as the increasing impact of cyber-attacks with the adoption of advanced information and communication technologies. To address these challenges, this paper proposes an innovative model-free resilience scheduling method based on state-adversarial deep reinforcement learning (DRL) for integrated demand response (IDR)-enabled IES. The proposed method designs an IDR program to explore the interaction ability of electricity-gas-heat flexible loads. Additionally, a state-adversarial Markov decision process (SA-MDP) model characterizes the energy scheduling problem of IES under cyber-attack. The state-adversarial soft actor-critic (SA-SAC) algorithm is proposed to mitigate the impact of cyber-attacks on the scheduling strategy. Simulation results demonstrate that our method is capable of adequately addressing the uncertainties resulting from RES and loads, mitigating the impact of cyber-attacks on the scheduling strategy, and ensuring a stable demand supply for various energy sources. Moreover, the proposed method demonstrates resilience against cyber-attacks. Compared to the original soft actor-critic (SAC) algorithm, it achieves a 10% improvement in economic performance under cyber-attack scenarios.
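The state-adversarial idea can be isolated in a few lines: an adversary perturbs the observation the scheduler acts on, and robustness is measured by worst-case cost. This sketch is illustrative only (a naive one-number policy and a grid-search adversary), not the SA-SAC algorithm:

```python
# Worst-case evaluation under bounded state perturbation (illustrative;
# NOT SA-SAC): the adversary distorts the observed load within +/- eps
# before the policy schedules generation.

def policy(observed_load):
    return observed_load          # naive policy: match the observed load

def imbalance_cost(true_load, generated):
    return abs(true_load - generated)

def worst_case_cost(true_load, eps, n_grid=21):
    # adversary searches perturbations on a grid over [-eps, +eps]
    perturbs = [-eps + 2.0 * eps * i / (n_grid - 1) for i in range(n_grid)]
    return max(imbalance_cost(true_load, policy(true_load + d))
               for d in perturbs)

nominal = imbalance_cost(100.0, policy(100.0))  # no attack
attacked = worst_case_cost(100.0, eps=5.0)      # worst bounded attack
```

A state-adversarial training method aims to shrink the gap between these two numbers by optimizing the policy against the worst-case perturbation rather than the clean observation.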


Equitable Time-Varying Pricing Tariff Design: A Joint Learning and Optimization Approach

Chen, Liudong, Xu, Bolun

arXiv.org Artificial Intelligence

Time-varying pricing tariffs incentivize consumers to shift their electricity demand and reduce costs, but may increase the energy burden for consumers with limited response capability. The utility must thus balance affordability and response incentives when designing these tariffs by considering consumers' response expectations. This paper proposes a joint learning-based identification and optimization method to design equitable time-varying tariffs. Our proposed method encodes historical prices and demand response data into a recurrent neural network (RNN) to capture high-dimensional and non-linear consumer price response behaviors. We then embed the RNN into the tariff design optimization, formulating a non-linear optimization problem with a quadratic objective. We propose a gradient-based solution method that achieves fast and scalable computation. Simulation using real-world consumer data shows that our equitable tariffs protect low-income consumers from price surges while effectively motivating consumers to reduce peak demand. The method also ensures revenue recovery for the utility company and achieves robust performance against demand response uncertainties and prediction errors.
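The joint learn-then-optimize loop can be miniaturized as follows; a hand-written linear response model stands in for the RNN, a price cap stands in for the equity constraint, and finite-difference gradient descent stands in for the paper's gradient-based solver. All numbers are invented:

```python
# Simplified tariff-design loop (illustrative; not the paper's RNN-based
# method): gradient descent adjusts hourly prices, under a cap, to
# flatten the consumer-response-adjusted load profile.

def response(prices, base_load, elasticity=2.0):
    avg = sum(prices) / len(prices)
    # consumers shift demand away from above-average-price hours;
    # total consumption is preserved because the shifts sum to zero
    return [l - elasticity * (p - avg) for l, p in zip(base_load, prices)]

def peak_objective(prices, base_load):
    load = response(prices, base_load)
    return sum(l * l for l in load)      # quadratic proxy for peakiness

def design_tariff(base_load, cap, lr=0.01, steps=500):
    prices = [1.0] * len(base_load)
    for _ in range(steps):
        f0 = peak_objective(prices, base_load)
        grads = []
        for t in range(len(prices)):     # finite-difference gradient
            bumped = prices[:]
            bumped[t] += 1e-4
            grads.append((peak_objective(bumped, base_load) - f0) / 1e-4)
        # project back into [0, cap]: the cap is the "equity" constraint
        prices = [min(cap, max(0.0, p - lr * g))
                  for p, g in zip(prices, grads)]
    return prices

base_load = [4.0, 5.0, 9.0, 6.0]
prices = design_tariff(base_load, cap=2.0)
load = response(prices, base_load)
```

The peak-hour price rises until the cap binds, flattening the load while total consumption is preserved; in the paper the response function is a trained RNN and the objective also enforces utility revenue recovery.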


Online Learning for Incentive-Based Demand Response

Muthirayan, Deepan, Khargonekar, Pramod P.

arXiv.org Artificial Intelligence

In this paper, we consider the problem of learning online to manage Demand Response (DR) resources. A typical DR mechanism requires the DR manager to assign a baseline to the participating consumer, where the baseline is an estimate of the counterfactual consumption of the consumer had it not been called to provide the DR service. A challenge in estimating the baseline is the consumer's incentive to inflate the baseline estimate. We consider the problem of learning online to estimate the baseline and to optimize the operating costs over a period of time under such incentives. We propose an online learning scheme that employs least squares for estimation, with a perturbation to the reward price (for the DR services or load curtailment) that is designed to balance the exploration-exploitation trade-off that arises with online learning. We show that our proposed scheme achieves a very low regret of $\mathcal{O}\left((\log{T})^2\right)$ with respect to the optimal operating cost over $T$ days of the DR program with full knowledge of the baseline, and is individually rational for the consumers to participate. Our scheme is significantly better than the averaging-type approach, which achieves only $\mathcal{O}(T^{1/3})$ regret.
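The scheme's two ingredients, least-squares baseline estimation and a decaying reward-price perturbation, can be sketched in a small simulation. All dynamics and parameters below are invented; the baseline-inflation incentives and the regret analysis are not modeled:

```python
# Schematic online DR loop (illustrative only): running-mean (least-
# squares) baseline estimation on normal days, plus a decaying
# reward-price perturbation on event days for exploration.
import random

random.seed(0)
true_baseline = 10.0

def consumption(is_event_day, reward):
    if is_event_day:
        return true_baseline - 0.4 * reward       # curtailment grows with reward
    return true_baseline + random.gauss(0, 0.5)   # normal-day noise

estimate, n_obs, total_curtailed = 0.0, 0, 0.0
base_reward = 5.0
for day in range(1, 201):
    if day % 5 == 0:                              # a DR event every fifth day
        reward = base_reward + 1.0 / day ** 0.5   # decaying perturbation
        total_curtailed += estimate - consumption(True, reward)
    else:                                         # least squares = running mean
        n_obs += 1
        estimate += (consumption(False, 0.0) - estimate) / n_obs
```

Measured curtailment is credited against the estimated baseline, which is why a consumer would like the estimate inflated; the paper's contribution is making this loop robust to that incentive while keeping regret polylogarithmic.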


A Novel Demand Response Model and Method for Peak Reduction in Smart Grids -- PowerTAC

Chandlekar, Sanjay, Boroju, Arthik, Jain, Shweta, Gujar, Sujit

arXiv.org Artificial Intelligence

One of the widely used peak reduction methods in smart grids is demand response, where one analyzes the shift in customers' (agents') usage patterns in response to the signal from the distribution company. Often, these signals are in the form of incentives offered to agents. This work studies the effect of incentives on the probabilities of accepting such offers in a real-world smart grid simulator, PowerTAC. We first show that there exists a function that depicts the probability of an agent reducing its load as a function of the discounts offered to it; we call it the reduction probability (RP). The RP function is further parametrized by the rate of reduction (RR), which can differ for each agent. We provide an optimal algorithm, MJS--ExpResponse, that outputs the discounts to each agent by maximizing the expected reduction under a budget constraint. When the RRs are unknown, we propose a Multi-Armed Bandit (MAB) based online algorithm, namely MJSUCB--ExpResponse, to learn the RRs. Experimentally, we show that it exhibits sublinear regret. Finally, we showcase the efficacy of the proposed algorithm in mitigating demand peaks in a real-world smart grid system using the PowerTAC simulator as a test bed.
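A small brute-force analogue of the allocation step: an exponential reduction probability RP(d) = 1 - exp(-RR * d) is assumed here, consistent with the "ExpResponse" naming but not confirmed by the abstract, and the agent loads, RRs, discount levels, and budget are invented. MJS--ExpResponse solves this optimally rather than by enumeration:

```python
# Budget-constrained discount allocation with an assumed exponential
# reduction probability (toy enumeration; illustrative numbers).
from itertools import product
from math import exp

agents = [        # (load_kw, rate_of_reduction RR) -- invented values
    (10.0, 0.9),
    (20.0, 0.3),
    (15.0, 0.6),
]
levels = [0, 1, 2, 3]   # allowed discount levels per agent
budget = 4              # total discount budget

def expected_reduction(discounts):
    # assumed form: RP(d) = 1 - exp(-RR * d)
    return sum(load * (1.0 - exp(-rr * d))
               for (load, rr), d in zip(agents, discounts))

best = max((d for d in product(levels, repeat=len(agents))
            if sum(d) <= budget), key=expected_reduction)
```

When the RRs are unknown, each agent's RR would instead be estimated online from accept/reject feedback, which is the role the MAB-based MJSUCB--ExpResponse plays.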